AIbase
# Trained on 2.5 billion data points

## Metaclip H14 Fullcc2.5b

MetaCLIP is a vision-language model trained on CommonCrawl data that improves on the original CLIP through an enhanced data-filtering method.

Text-to-Image · Transformers · facebook · 26.29k · 40
## Metaclip B32 Fullcc2.5b

MetaCLIP is a vision-language model trained on 2.5 billion data points from CommonCrawl (CC) to construct a shared image-text embedding space.

Text-to-Image · Transformers · facebook · 413 · 7
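Both checkpoints follow the CLIP recipe: images and texts are encoded into one shared embedding space, so matching a caption to an image reduces to a cosine-similarity lookup over L2-normalized vectors. A minimal sketch of that matching step, using made-up embedding vectors rather than real MetaCLIP outputs (in practice the embeddings would come from the model's image and text encoders):

```python
import numpy as np

def cosine_scores(image_emb, text_embs):
    """Score one image embedding against several text embeddings.

    CLIP-style models compare L2-normalized vectors, so cosine
    similarity reduces to a dot product after normalization.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return txt @ img

# Hypothetical 4-dim embeddings standing in for real encoder outputs
# (actual MetaCLIP embeddings are much higher-dimensional).
image = np.array([0.9, 0.1, 0.0, 0.1])
texts = np.array([
    [0.8, 0.2, 0.1, 0.0],   # e.g. "a photo of a dog"
    [0.0, 0.1, 0.9, 0.3],   # e.g. "a photo of a cat"
])

scores = cosine_scores(image, texts)
best = int(np.argmax(scores))  # index of the best-matching caption
```

The highest-scoring caption is taken as the match; zero-shot classification with these models works the same way, with one text embedding per candidate label.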
© 2025 AIbase